large reasoning model

Terms from Artificial Intelligence: humans at the heart of algorithms


Large reasoning models (LRMs) build on large language models (LLMs) but add elements that emulate aspects of thinking, especially chain-of-thought reasoning and self-reflection. They substantially outperform LLMs on logical, mathematical, and coding problems. However, research from Apple showed that LRMs (as available in 2025) tended to fail dramatically as problem complexity increased.
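
A minimal sketch of the difference between a plain prompt and a chain-of-thought-style prompt with a self-reflection step. The `query_model` function here is a hypothetical placeholder, not a real API; it would need to be wired up to an actual model endpoint.

```python
def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a language model API."""
    raise NotImplementedError("replace with a real model client")

question = (
    "A bat and a ball cost 1.10 in total. The bat costs 1.00 more "
    "than the ball. How much does the ball cost?"
)

# Plain LLM-style prompt: asks for the answer directly.
direct_prompt = f"{question}\nAnswer:"

# LRM / chain-of-thought-style prompt: asks for step-by-step reasoning,
# then a review of that reasoning (a simple form of self-reflection)
# before the final answer is given.
cot_prompt = (
    f"{question}\n"
    "Think through the problem step by step, showing your working.\n"
    "Then check your reasoning for mistakes before giving a final answer."
)

# direct_answer = query_model(direct_prompt)
# reasoned_answer = query_model(cot_prompt)
```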